📚 node [[gradient_descent|gradient descent]]
⥅ related node [[gradient_descent]]
⥅ related node [[mini batch_stochastic_gradient_descent_(sgd)]]
⥅ related node [[stochastic_gradient_descent_(sgd)]]
⥅ node [[gradient_descent]] pulled by Agora

gradient descent

Go back to the [[AI Glossary]]

A technique to minimize loss by computing the gradients of the loss with respect to the model's parameters, conditioned on training data. Informally, gradient descent iteratively adjusts the parameters, moving each one a small step in the direction opposite its gradient, and gradually finds the combination of weights and biases that minimizes loss.
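
As an illustrative sketch (not part of the original glossary entry), here is plain gradient descent fitting a one-feature linear model with mean squared error; the toy data, learning rate, and step count are all assumed values chosen for the example:

```python
import numpy as np

# Hypothetical toy data: y = 3x + 2 plus a little noise
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 2.0 + rng.normal(scale=0.1, size=100)

w, b = 0.0, 0.0          # parameters: weight and bias
learning_rate = 0.1      # step size (assumed value)

for step in range(500):
    y_pred = w * x + b
    error = y_pred - y
    loss = np.mean(error ** 2)        # mean squared error
    # Gradients of the loss with respect to w and b
    grad_w = 2.0 * np.mean(error * x)
    grad_b = 2.0 * np.mean(error)
    # Step each parameter against its gradient
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"w = {w:.2f}, b = {b:.2f}, loss = {loss:.4f}")
```

After enough iterations, w and b should approach 3 and 2, the values used to generate the data, as the loss shrinks toward the noise floor.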
